In this work, we explore recurrent neural network architectures for tuberculosis (TB) cough classification. In contrast to previous unsuccessful attempts to apply deep architectures in this domain, we show that a basic bidirectional long short-term memory network (BiLSTM) can achieve improved performance. Furthermore, we show that by performing greedy feature selection in conjunction with a newly proposed attention-based architecture that learns patient-invariant features, better generalization is achieved compared with a baseline and the other architectures considered. In addition, this attention mechanism allows inspection of the temporal regions of the audio signal considered most important for classification. Finally, we develop a neural style transfer technique to infer idealized inputs that can subsequently be analyzed. We find distinct differences between the idealized power spectra of TB and non-TB coughs, which provide clues about the origin of the features in the audio signal.
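To make the attention pooling concrete, below is a minimal sketch of a BiLSTM with temporal attention of the kind described above, assuming MFCC-style frame sequences as input; the layer sizes, feature dimension, and class count are illustrative, not the authors' exact configuration. The returned attention weights are what would allow inspection of the temporal regions the classifier relies on.

```python
import torch
import torch.nn as nn

class AttentionBiLSTM(nn.Module):
    """BiLSTM over per-frame audio features with attention pooling over time."""
    def __init__(self, n_features=13, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)        # one relevance score per frame
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                           # x: (batch, time, n_features)
        h, _ = self.lstm(x)                         # (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)      # attention weights over time
        context = (w * h).sum(dim=1)                # weighted temporal pooling
        return self.out(context), w.squeeze(-1)     # logits + inspectable weights

# Example: a batch of 8 coughs, each 100 frames of 13 MFCCs.
logits, weights = AttentionBiLSTM()(torch.randn(8, 100, 13))
```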
We present a deep learning based automatic cough classifier that can discriminate tuberculosis (TB) coughs from COVID-19 coughs and healthy coughs. Both TB and COVID-19 are respiratory diseases, are contagious, have cough as a predominant symptom, and claim thousands of lives each year. The cough recordings were collected in both indoor and outdoor settings and uploaded via smartphones by subjects around the world, and therefore contain various noise. The cough data comprise 1.68 hours of TB coughs, 18.54 minutes of COVID-19 coughs and 1.69 hours of healthy coughs from 47 TB patients, 229 COVID-19 patients and 1498 healthy subjects, and were used to train and evaluate a CNN, an LSTM and a ResNet50. These three deep architectures were also pre-trained on 2.14 hours of sneezes, 2.91 hours of speech and 2.79 hours of noise to improve performance. The class imbalance in our dataset was addressed by using the SMOTE data balancing technique and by using performance metrics such as F1 score and AUC. Our study shows that the highest F1 scores of 0.9259 and 0.8631 were achieved by the pre-trained ResNet50 for the two-class (TB vs COVID-19) and three-class (TB vs COVID-19 vs healthy) cough classification tasks, respectively. The application of deep transfer learning improved the classifiers' performance and made them more robust, as they generalized better over the cross-validation folds. Their performance exceeds the TB triage test requirements set by the World Health Organization (WHO). The features producing the best performance include higher-order MFCCs, suggesting that the differences between TB and COVID-19 coughs are imperceptible to the human ear. This type of cough audio classification is non-contact, cost-effective and can easily be deployed on a smartphone, making it an excellent tool for both TB and COVID-19 screening.
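As a hedged illustration of the evaluation recipe (higher-order MFCC features, SMOTE oversampling, F1 and AUC), here is a small sketch using librosa, imbalanced-learn and scikit-learn; a logistic-regression classifier stands in for the CNN/LSTM/ResNet50 models, and the MFCC settings and synthetic data are placeholders rather than the paper's exact setup.

```python
import numpy as np
import librosa
from imblearn.over_sampling import SMOTE
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

def mfcc_features(signal, sr, n_mfcc=26):
    """Summarize one cough recording as the per-coefficient mean of its MFCCs."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)                       # (n_mfcc,) vector per recording

# In practice X would be built by applying mfcc_features to each recording;
# here random features and an imbalanced label vector (0 = COVID-19, 1 = TB) stand in.
X, y = np.random.randn(300, 26), np.r_[np.zeros(250), np.ones(50)]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # oversample minority class
clf = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)

print("F1 :", f1_score(y_te, clf.predict(X_te)))
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```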
We present "wake cough", the application of wake-word spotting to coughs using a ResNet50 together with the identification of coughers using i-vectors, towards a long-term, personalized cough monitoring system. Coughs recorded in quiet (73 $\pm$ 5 dB) and noisy (34 $\pm$ 17 dB) environments were used to extract i-vectors, x-vectors and d-vectors, which served as features for the classifiers. The system achieves 90.02% accuracy when distinguishing between 51 coughers using an MLP on 2-sec long cough segments in the noisy environment. When distinguishing between 5 and 14 coughers using longer (100 sec) segments in the quiet environment, this accuracy improves to 99.78% and 98.39%, respectively. Unlike for speech, i-vectors outperform x-vectors and d-vectors at identifying coughers. These coughs were added as an extra class to the Google Speech Commands dataset, and features were extracted by preserving the end-to-end time-domain information in the trigger phrases. The highest accuracy of 88.58% was achieved when spotting coughs among 35 other trigger phrases using a ResNet50. Wake cough therefore represents a personalized, non-intrusive cough monitoring system that is power-efficient, since on-device wake-word detection can keep a smartphone-based monitoring device mostly dormant. This makes wake cough extremely attractive in multi-bed ward environments for monitoring patients' long-term recovery from lung ailments such as tuberculosis (TB) and COVID-19.
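The cougher-identification stage could look roughly like the following sketch: an MLP classifier over fixed-dimension speaker-style embeddings. The i-vector extraction itself is not shown (random vectors stand in for it), and the embedding dimension, layer sizes and segment counts are assumptions, not the system's actual configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder data: 51 coughers, 40 two-second cough segments each,
# represented by 100-dimensional i-vectors (extraction not shown).
rng = np.random.default_rng(0)
X = rng.normal(size=(51 * 40, 100))
y = np.repeat(np.arange(51), 40)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(256, 128), max_iter=500, random_state=0)
mlp.fit(X_tr, y_tr)
print("cougher-ID accuracy:", accuracy_score(y_te, mlp.predict(X_te)))
```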
We present a machine-learning framework to accurately characterize morphologies of Active Galactic Nucleus (AGN) host galaxies within $z<1$. We first use PSFGAN to decouple host galaxy light from the central point source, then we invoke the Galaxy Morphology Network (GaMorNet) to estimate whether the host galaxy is disk-dominated, bulge-dominated, or indeterminate. Using optical images from five bands of the HSC Wide Survey, we build models independently in three redshift bins: low $(0<z<0.25)$, medium $(0.25<z<0.5)$, and high $(0.5<z<1.0)$. By first training on a large number of simulated galaxies, then fine-tuning using far fewer classified real galaxies, our framework predicts the actual morphology for $\sim$ $60\%-70\%$ of host galaxies from test sets, with a classification precision of $\sim$ $80\%-95\%$, depending on redshift bin. Specifically, our models achieve disk precision of $96\%/82\%/79\%$ and bulge precision of $90\%/90\%/80\%$ (for the 3 redshift bins), at thresholds corresponding to indeterminate fractions of $30\%/43\%/42\%$. The classification precision of our models has a noticeable dependency on host galaxy radius and magnitude. No strong dependency is observed on contrast ratio. When classifying real AGNs, our models agree well with traditional 2D fitting with GALFIT. The PSFGAN+GaMorNet framework does not depend on the choice of fitting functions or galaxy-related input parameters, runs orders of magnitude faster than GALFIT, and is easily generalizable via transfer learning, making it an ideal tool for studying AGN host galaxy morphology in forthcoming large imaging surveys.
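The two-stage training strategy (pre-train on many simulated galaxies, then fine-tune on far fewer labeled real ones) follows a standard transfer-learning pattern. The sketch below illustrates that pattern only, with a torchvision ResNet-18 standing in for GaMorNet, a three-way disk/indeterminate/bulge head, and an assumed choice of which layers to freeze during fine-tuning.

```python
import torch
import torch.nn as nn
from torchvision import models

# Stand-in backbone: a torchvision ResNet-18 rather than GaMorNet itself.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 3)   # disk / indeterminate / bulge

# Phase 1: train everything on many simulated galaxies (data loading not shown).
# Phase 2: fine-tune on far fewer labeled real galaxies with early layers frozen.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith(("layer4", "fc"))   # assumed freezing scheme

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(images, labels):
    """One fine-tuning step on a batch of real, classified host galaxies."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```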
Wearable sensors for measuring head kinematics can be noisy due to imperfect interfaces with the body. Mouthguards are used to measure head kinematics during impacts in traumatic brain injury (TBI) studies, but deviations from reference kinematics can still occur due to potential looseness. In this study, deep learning is used to compensate for the imperfect interface and improve measurement accuracy. A set of one-dimensional convolutional neural network (1D-CNN) models was developed to denoise mouthguard kinematics measurements along three spatial axes of linear acceleration and angular velocity. The denoised kinematics had significantly reduced errors compared to reference kinematics, and reduced errors in brain injury criteria and tissue strain and strain rate calculated via finite element modeling. The 1D-CNN models were also tested on an on-field dataset of college football impacts and a post-mortem human subject dataset, with similar denoising effects observed. The models can be used to improve detection of head impacts and TBI risk evaluation, and potentially extended to other sensors measuring kinematics.
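A minimal version of such a per-axis denoiser could be a small stack of 1D convolutions trained with an MSE loss against the reference sensor, as sketched below; the channel counts, kernel size, and window length are assumptions, not the study's actual architecture.

```python
import torch
import torch.nn as nn

class Denoiser1D(nn.Module):
    """Maps a noisy single-axis kinematics trace to a denoised trace."""
    def __init__(self, channels=32, kernel=7):
        super().__init__()
        pad = kernel // 2
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, kernel, padding=pad), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel, padding=pad), nn.ReLU(),
            nn.Conv1d(channels, 1, kernel, padding=pad),
        )

    def forward(self, x):                 # x: (batch, 1, n_samples)
        return self.net(x)

# One such model per axis of linear acceleration and angular velocity,
# trained against time-aligned reference kinematics.
model = Denoiser1D()
noisy = torch.randn(16, 1, 200)           # e.g. 200-sample impact windows
loss = nn.functional.mse_loss(model(noisy), torch.randn(16, 1, 200))
```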
Deep convolutional neural networks (CNNs) have been widely used for medical image segmentation. In most studies, only the output layer is exploited to compute the final segmentation results, and the hidden representations of the deep learned features have not been well understood. In this paper, we propose a prototype segmentation (ProtoSeg) method to compute a binary segmentation map based on deep features. We measure the segmentation ability of the features by computing the Dice score between the feature segmentation map and the ground truth, which we name the segmentation ability score (SA score for short). The SA score can quantify the segmentation abilities of deep features in different layers and units, helping us understand deep neural networks for segmentation. In addition, our method can provide a mean SA score, which gives a performance estimate of the output on test images without ground truth. Finally, we use the proposed ProtoSeg method to compute the segmentation map directly on input images to further understand the segmentation ability of each input image. Results are presented for segmenting tumors in brain MRI, lesions in skin images, COVID-related abnormalities in CT images, the prostate in abdominal MRI, and pancreatic masses in CT images. Our method can provide new insights for interpretable and explainable AI systems for medical image segmentation. Our code is available at: \url{https://github.com/shengfly/ProtoSeg}.
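One plausible reading of the prototype idea, for illustration only, is to group per-pixel deep features into two clusters, binarize the result, and score it against the ground truth with the Dice coefficient. The sketch below does exactly that with k-means; the paper's actual prototype computation may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def dice(a, b, eps=1e-7):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return (2.0 * (a & b).sum() + eps) / (a.sum() + b.sum() + eps)

def sa_score(features, ground_truth):
    """features: (C, H, W) deep features from one layer; ground_truth: (H, W) binary mask."""
    c, h, w = features.shape
    pixels = features.reshape(c, -1).T                        # (H*W, C) per-pixel vectors
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
    seg = labels.reshape(h, w)
    # The two clusters are unordered; keep whichever assignment matches better.
    return max(dice(seg == 0, ground_truth), dice(seg == 1, ground_truth))

score = sa_score(np.random.rand(64, 32, 32), np.random.rand(32, 32) > 0.5)
```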
Soft labels in image classification are vector representations of an image's true classification. In this paper, we investigate soft labels in the context of satellite object detection. We propose using detections as the basis for a new dataset of soft labels. Much of the effort in creating a high-quality model is gathering and annotating the training data. If we could use a model to generate a dataset for us, we could not only rapidly create datasets, but also supplement existing open-source datasets. Using a subset of the xView dataset, we train a YOLOv5 model to detect cars, planes, and ships. We then use that model to generate soft labels for a second training set, on which we train a new model and compare it to the original. We show that soft labels can be used to train a model that is almost as accurate as a model trained on the original data.
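A hedged sketch of how model-generated (soft) labels could be produced from a trained detector is shown below, using the public ultralytics/yolov5 hub interface. The confidence threshold, the choice to retain the detector confidence in each label row, and the use of the stock yolov5s weights are assumptions rather than the paper's exact pipeline; in the paper's setting the detector would be the model fine-tuned on cars, planes, and ships.

```python
import torch

# Load a YOLOv5 detector through the public torch.hub entry point.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def detections_to_soft_labels(image_path, conf_floor=0.25):
    """Write YOLO-format rows for one image, keeping detector confidence as a soft label."""
    results = model(image_path)
    rows = []
    for *xywhn, conf, cls in results.xywhn[0].tolist():   # normalized box coordinates
        if conf >= conf_floor:
            rows.append(f"{int(cls)} " + " ".join(f"{v:.6f}" for v in xywhn)
                        + f" {conf:.4f}")
    return rows
```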
The NASA Astrophysics Data System (ADS) is an essential tool for researchers that allows them to explore the astronomy and astrophysics scientific literature, but it has yet to exploit recent advances in natural language processing. At ADASS 2021, we introduced astroBERT, a machine learning language model tailored to the text used in astronomy papers in ADS. In this work we: - announce the first public release of the astroBERT language model; - show how astroBERT improves over existing public language models on astrophysics specific tasks; - and detail how ADS plans to harness the unique structure of scientific papers, the citation graph and citation context, to further improve astroBERT.
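If the released model is distributed through the Hugging Face hub (the model identifier below is an assumption, not something stated in the abstract), loading it for embedding extraction would follow the usual transformers pattern:

```python
from transformers import AutoTokenizer, AutoModel

MODEL_ID = "adsabs/astroBERT"   # assumed hub identifier for the public release
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

sentence = "The quiescent galaxy population at z ~ 2 shows compact morphologies."
inputs = tokenizer(sentence, return_tensors="pt")
embeddings = model(**inputs).last_hidden_state   # contextual token embeddings
```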
Recent increases in computing power have enabled the numerical simulation of many complex flow problems that are of practical and strategic interest for naval applications. A noticeable area of advancement is the computation of turbulent, two-phase flows resulting from wave breaking and other multiphase flow processes such as cavitation that can generate underwater sound and entrain bubbles in ship wakes, among other effects. Although advanced flow solvers are sophisticated and are capable of simulating high Reynolds number flows on large numbers of grid points, challenges in data analysis remain. Specifically, there is a critical need to transform highly resolved flow fields described on fine grids at discrete time steps into physically resolved features for which the flow dynamics can be understood and utilized in naval applications. This paper presents our recent efforts in this field. In previous works, we developed a novel algorithm to track bubbles in breaking wave simulations and to interpret their dynamical behavior over time (Gao et al., 2021a). We also discovered a new physical mechanism driving bubble production within breaking wave crests (Gao et al., 2021b) and developed a model to relate bubble behaviors to underwater sound generation (Gao et al., 2021c). In this work, we applied our bubble tracking algorithm to the breaking wave simulations and investigated the bubble trajectories, bubble creation mechanisms, and bubble acoustics based on our previous works.
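The bubble-tracking step can be pictured as labeling connected gas regions in each snapshot and linking them across time. The following is a deliberately simplified nearest-centroid sketch of that idea, not the algorithm of Gao et al. (2021a), whose detection and matching criteria are more sophisticated; the threshold and matching distance are placeholders.

```python
import numpy as np
from scipy import ndimage

def bubble_centroids(gas_fraction, threshold=0.5):
    """Label connected gas regions in one snapshot and return their centroids."""
    labels, n = ndimage.label(gas_fraction > threshold)
    return np.array(ndimage.center_of_mass(gas_fraction, labels, range(1, n + 1)))

def match_bubbles(prev_centroids, next_centroids, max_dist=2.0):
    """Greedy nearest-centroid matching between consecutive time steps."""
    pairs = []
    for i, p in enumerate(prev_centroids):
        d = np.linalg.norm(next_centroids - p, axis=1)
        j = int(d.argmin())
        if d[j] < max_dist:
            pairs.append((i, j))          # bubble i continues as bubble j
    return pairs
```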
The challenge of labeling large example datasets for computer vision continues to limit the availability and scope of image repositories. This research provides a new approach for automated data collection, curation, labeling, and iterative training with minimal human intervention for the case of overhead satellite imagery and object detection. The new operational scale effectively scanned an entire city (68 square miles) in a grid search and derived predictions of vehicle color from space-based observations. A partially trained YOLOv5 model served as the initial inference seed, whose outputs fed iterative cycles of increasingly refined model predictions. Soft labeling here refers to accepting label noise as a potentially valuable augmentation that reduces overfitting and enhances generalized prediction on previously unseen test data. The approach exploits a real-world case in which cropped images of cars can automatically receive white or colored labels from their pixel values, completing an end-to-end pipeline without over-reliance on human labor.
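The automatic white-versus-colored labeling from pixel values admits a simple illustration: treat low mean saturation combined with high brightness as "white". The thresholds below are assumptions for the sketch, not the study's actual rule; labels produced this way would then feed the iterative YOLOv5 retraining loop described above.

```python
import numpy as np

def is_white_car(crop_rgb, sat_thresh=0.25, val_thresh=0.55):
    """Heuristic soft label for a detected car crop: low saturation + high brightness => 'white'.

    crop_rgb: (H, W, 3) uint8 crop of a detected car.
    """
    rgb = crop_rgb.astype(np.float32) / 255.0
    mx, mn = rgb.max(axis=2), rgb.min(axis=2)
    saturation = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-6), 0.0)
    return float(saturation.mean()) < sat_thresh and float(mx.mean()) > val_thresh

# Example: a nearly uniform bright crop is labeled white.
label = "white" if is_white_car(np.full((32, 32, 3), 230, np.uint8)) else "colored"
```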